Voices from CEPS Ideas Lab

18. AI liability in the EU: Who’s responsible when things go wrong? Ft. Artur Bogucki

Update: 2025-05-22
Description

The rapid development of AI has exposed significant gaps in liability law, making it difficult to determine who is responsible when AI systems cause harm. This is particularly challenging due to the “black box” nature of AI and the fragmented regulatory landscape across EU Member States. One future solution may be the revised Product Liability Directive, set to take effect in 2026, which will shift the burden of proof onto AI companies and require greater transparency. However, key issues such as misinformation, discrimination and intellectual property violations remain unaddressed, raising concerns about the need for a more comprehensive AI liability framework in the EU.


Joining host Tom Parker is Artur Bogucki, Associate Researcher in the Global Governance, Regulation, Innovation and Digital Economy (GRID) unit at CEPS, to examine the challenges of AI regulation and discuss what policymakers need to do to ensure a clear, enforceable liability system that keeps pace with technological advancements.  


For further reading on CEPS’ research on AI regulation and liability, and Artur Bogucki’s work on digital governance and technology policy, follow this link.  


Watch this episode here


